9 research outputs found
Motion capture based on RGBD data from multiple sensors for avatar animation
With recent advances in technology and the emergence of affordable RGB-D sensors for a
wider range of users, markerless motion capture has become an active field of research
in both computer vision and computer graphics.
In this thesis, we designed a proof of concept (POC) for a new tool that performs
motion capture using a variable number of commodity RGB-D sensors of different brands
and technical specifications, in environments with no constraints on sensor layout. The
main goal of this work is to provide motion capture capabilities with a handful of
RGB-D sensors, without imposing strong requirements on lighting, background, or the
extent of the motion capture area. Naturally, the number of RGB-D sensors needed is
inversely proportional to their resolution, and directly proportional to the size of
the area to track.
Built on top of the OpenNI 2 library, our POC is compatible with most of the non-high-end
RGB-D sensors currently on the market. Because a single computer lacks the resources
to drive more than a couple of sensors simultaneously, the setup comprises multiple
computers. To keep data coherent and synchronized across sensors and computers, our
tool uses a semi-automatic calibration method and a message-oriented network protocol.
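A message-oriented protocol of this kind typically tags every frame with its origin and capture time so receivers can align streams. The thesis abstract does not specify the wire format, so the field layout below is purely an illustrative assumption:

```python
import struct

# Hypothetical wire format for per-frame metadata; the actual protocol is not
# described in the abstract, so this layout is an illustrative assumption.
# Fields: sensor id (uint16), frame index (uint32),
#         capture timestamp in microseconds (uint64), all big-endian.
FRAME_HEADER = struct.Struct(">HIQ")

def pack_frame_header(sensor_id: int, frame_idx: int, timestamp_us: int) -> bytes:
    """Serialize one frame's metadata for transmission over the network."""
    return FRAME_HEADER.pack(sensor_id, frame_idx, timestamp_us)

def unpack_frame_header(payload: bytes):
    """Recover (sensor_id, frame_idx, timestamp_us) on the receiving side."""
    return FRAME_HEADER.unpack(payload)
```

A receiver can then group frames from different sensors by nearest timestamp before merging their data.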
From the color and depth data given by a sensor, we can also obtain a 3D point-cloud
representation of the environment. By combining point clouds from multiple sensors, we
can assemble a complete, animated 3D point cloud that can be visualized from any viewpoint.
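Merging the per-sensor clouds amounts to transforming each one into a shared world frame with the extrinsic matrices produced by calibration, then concatenating. A minimal sketch, assuming the 4x4 sensor-to-world transforms are already known:

```python
import numpy as np

def merge_point_clouds(clouds, extrinsics):
    """Fuse per-sensor point clouds into one cloud in a shared world frame.

    clouds     -- list of (N_i, 3) arrays, each in its sensor's local frame
    extrinsics -- list of 4x4 sensor-to-world transforms (e.g. from the
                  calibration step); these matrices are assumed inputs here
    """
    merged = []
    for pts, T in zip(clouds, extrinsics):
        # Homogeneous coordinates: append a column of ones, apply T, drop w.
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])
        merged.append((homo @ T.T)[:, :3])
    return np.vstack(merged)
```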
Given a 3D avatar model with an attached skeleton, we can use an iterative
optimization method (e.g., Simplex) to fit a skeleton configuration to each
point-cloud frame; using these skeleton configurations as key frames yields a
3D avatar animation.
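The fitting step above can be sketched with SciPy's Nelder-Mead (downhill simplex) optimizer. The skeleton model and cost function here are simplified placeholders, not the thesis's actual formulation:

```python
import numpy as np
from scipy.optimize import minimize

def fit_skeleton_to_cloud(cloud, initial_params, forward_kinematics):
    """Fit skeleton parameters to one point-cloud frame.

    cloud              -- (N, 3) array of world-space points for this frame
    initial_params     -- starting joint-parameter vector (e.g. last frame's fit)
    forward_kinematics -- maps a parameter vector to (J, 3) joint positions;
                          a placeholder standing in for the avatar's real rig
    """
    def cost(params):
        joints = forward_kinematics(params)
        # Penalize each joint's distance to its nearest cloud point.
        d = np.linalg.norm(cloud[None, :, :] - joints[:, None, :], axis=2)
        return d.min(axis=1).sum()

    res = minimize(cost, initial_params, method="Nelder-Mead")
    return res.x
```

Warm-starting each frame from the previous frame's solution keeps the simplex search local and the resulting animation temporally coherent.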
Sistema d'interacció basat en Kinect per a realitat virtual (A Kinect-based interaction system for virtual reality)
This project focuses on using the Microsoft Kinect camera to build an application that consumes the data provided by the camera and makes it simple to build interfaces for interacting with a virtual reality environment
A multi-projector CAVE system with commodity hardware and gesture-based interaction
Spatially-immersive systems such as CAVEs provide users with surrounding worlds by projecting 3D models on multiple screens around the viewer. Compared to alternative immersive systems such as HMDs, CAVE systems are a powerful tool for collaborative inspection of virtual environments due to better use of peripheral vision, less sensitivity to tracking errors, and higher communication possibilities among users. Unfortunately, traditional CAVE setups require sophisticated equipment including stereo-ready projectors and tracking systems with high acquisition and maintenance costs. In this paper we present the design and construction of a passive-stereo, four-wall CAVE system based on commodity hardware. Our system works with any mix of a wide range of projector models that can be replaced independently at any time, and achieves high resolution and brightness at a minimum cost. The key ingredients of our CAVE are a self-calibration approach that guarantees continuity across the screen, as well as a gesture-based interaction approach based on a clever
combination of skeletal data from multiple Kinect sensors.
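One straightforward way to combine skeletons from several Kinect sensors is to average each joint's position across sensors, weighted by per-joint tracking confidence. The abstract does not detail the paper's actual fusion scheme, so this weighted averaging is an illustrative assumption:

```python
import numpy as np

def fuse_skeletons(skeletons, confidences):
    """Confidence-weighted fusion of per-sensor skeleton estimates.

    skeletons   -- (S, J, 3) joint positions from S sensors, already
                   calibrated into a common world frame
    confidences -- (S, J) per-joint tracking confidence in [0, 1]
    Returns (J, 3) fused joint positions.
    """
    w = np.asarray(confidences, dtype=float)[:, :, None]   # (S, J, 1)
    total = w.sum(axis=0)
    # Avoid division by zero for joints no sensor tracked.
    total[total == 0] = 1.0
    return (np.asarray(skeletons) * w).sum(axis=0) / total
```

Weighting by confidence lets a sensor with a clear frontal view dominate joints that another sensor sees occluded.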
Choosing the right cell line for rectal cancer research
To date, no effective method exists that predicts
response to preoperative chemoradiation (CRT) in
locally advanced rectal cancer (LARC). Nevertheless,
identifying patients who have a higher likelihood
of responding to preoperative CRT could be crucial for
decreasing treatment morbidity and avoiding expensive
and time-consuming treatments. Using the Gng4, c-Myc,
Pola1, and Rrm1 signature, we were able to establish
a model that predicts response to CRT in rectal cancer
with a sensitivity of 60% and a specificity of 100%. The aim
of this study was to characterize c-Myc status at the DNA,
RNA, and protein levels in three tumor cell lines (SW480,
SW620, and SW837) to establish the best cell-line model
and, subsequently, to carry out gene silencing of
c-Myc by means of RNA interference (RNAi). To study
the expression levels of c-Myc, we used Polymerase
Chain Reaction (PCR) amplification and sequencing,
quantitative real-time PCR (qRT-PCR), and western blot
analysis in each cell line. SW480 and SW620 showed an
A > G variation in exon 2, which caused a substitution
of asparagine by serine, and SW837 revealed a G > A
transition in the same exon, which caused a mutation at
codon 92. All three cell lines expressed c-Myc mRNA.
SW837 showed decreased c-Myc expression levels
compared with SW480 and SW620. At the protein level,
SW620 showed the highest expression of c-Myc.
According to these results, we can perform
c-Myc gene-silencing experiments to analyze the
role of this biomarker in response to treatment
Posttranslational processing of GTP-binding proteins
SIGLE. Available from British Library Document Supply Centre (DSC:DX171922), United Kingdom